
    Uncertainty quantification in medical image synthesis

    Machine learning approaches to medical image synthesis have shown outstanding performance, but often do not convey uncertainty information. In this chapter, we survey uncertainty quantification methods in medical image synthesis and advocate the use of uncertainty for improving clinicians’ trust in machine learning solutions. First, we describe basic concepts in uncertainty quantification and discuss its potential benefits in downstream applications. We then review computational strategies that facilitate inference, and identify the main technical and clinical challenges. We provide a first comprehensive review to inform how to quantify, communicate and use uncertainty in medical image synthesis applications.
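
    The basic concepts referred to above are often summarised by splitting predictive uncertainty into an intrinsic (aleatoric) part and a parameter (epistemic) part. As a hedged illustration (a standard decomposition from the Bayesian deep-learning literature, not a formula quoted from this chapter), with T stochastic forward passes producing means \mu_t(x) and variances \sigma_t^2(x):

    \[
      \operatorname{Var}[y \mid x] \;\approx\;
      \underbrace{\frac{1}{T}\sum_{t=1}^{T}\sigma_t^{2}(x)}_{\text{aleatoric (intrinsic)}}
      \;+\;
      \underbrace{\frac{1}{T}\sum_{t=1}^{T}\mu_t^{2}(x)-\Bigl(\frac{1}{T}\sum_{t=1}^{T}\mu_t(x)\Bigr)^{2}}_{\text{epistemic (parameter)}}
    \]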

    Integrating the processes in the evolutionary system of domestication

    Genetics has long been used as a source of evidence for understanding domestication origins. A recent shift in the emphasis of archaeological evidence, from a rapid transition paradigm of hunter-gatherers to agriculturalists to a protracted transition paradigm, has highlighted how dependent the interpretation of genetic data was on archaeological evidence, resulting in a period of discord in which the two evidence types appeared to support different paradigms. Further examination showed that the discriminatory power of the approaches employed in genetics was low, and that they were framed within the rapid paradigm rather than testing it. To interpret genetic data under the new protracted paradigm, we must take into account how that paradigm changes our expectations of genetic diversity. Preliminary examination suggests that a number of features that constituted key evidence under the rapid paradigm are likely to be interpreted very differently under the protracted paradigm. Specifically, in the protracted transition the mode and mechanisms involved in the evolution of the domestication syndrome have become much more influential in shaping genetic diversity. The result is that numerous factors interacting across several levels of organization in a domestication system need to be taken into account in order to understand the evolution of the process. This presents a complex problem of integrating different data types which is difficult to describe formally. One possible way forward is to use Bayesian approximation approaches that allow complex systems to be measured in a way that does not require such formality.

    Bayesian Image Quality Transfer with CNNs: Exploring Uncertainty in dMRI Super-Resolution

    In this work, we investigate the value of uncertainty modeling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to state-of-the-art performance in SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use. Comment: Accepted paper at MICCAI 201
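
    To make the intrinsic-uncertainty component concrete, below is a minimal PyTorch-style sketch, written for illustration only (layer names, shapes and the 3D-convolution setup are assumptions, not the authors' implementation): a head that predicts a per-voxel mean and log-variance, trained with the heteroscedastic Gaussian negative log-likelihood. The parameter-uncertainty component (variational dropout over the weights) would be added separately.

    import torch
    import torch.nn as nn

    class HeteroscedasticHead(nn.Module):
        """Predicts per-voxel mean and log-variance of the super-resolved patch."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.mean = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
            self.log_var = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)

        def forward(self, feats):
            return self.mean(feats), self.log_var(feats)

    def heteroscedastic_nll(mu, log_var, target):
        # 0.5 * [ (y - mu)^2 / sigma^2 + log sigma^2 ], averaged over voxels
        return 0.5 * (((target - mu) ** 2) * torch.exp(-log_var) + log_var).mean()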

    Foveation for Segmentation of Mega-Pixel Histology Images

    Segmenting histology images is challenging because of the sheer size of the images with millions or even billions of pixels. Typical solutions pre-process each histology image by dividing it into patches of fixed size and/or down-sampling to meet memory constraints. Such operations incur information loss in the field-of-view (FoV) (i.e., spatial coverage) and the image resolution. The impact on segmentation performance is, however, as yet understudied. In this work, we first show under typical memory constraints (e.g., 10 GB of GPU memory) that the trade-off between FoV and resolution considerably affects segmentation performance on histology images, and that its influence also varies spatially according to local patterns in different areas (see Fig. 1). Based on this insight, we then introduce the foveation module, a learnable “dataloader” which, for a given histology image, adaptively chooses the appropriate configuration (FoV/resolution trade-off) of the input patch to feed to the downstream segmentation model at each spatial location (Fig. 1). The foveation module is jointly trained with the segmentation network to maximise the task performance. We demonstrate, on the Gleason2019 challenge dataset for histopathology segmentation, that the foveation module improves segmentation performance over cases trained with patches of a fixed FoV/resolution trade-off. Moreover, our model achieves better segmentation accuracy for the two most clinically important and ambiguous classes (Gleason Grades 3 and 4) than the top performers in the challenge by 13.1% and 7.5%, and improves on the average performance of 6 human experts by 6.5% and 7.5%.
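
    As an illustrative sketch of the idea (names, layer sizes and the use of a Gumbel-softmax relaxation are assumptions made here, not details taken from the paper), the module below scores a small set of candidate FoV/resolution configurations at each spatial location of a thumbnail view, so the choice stays differentiable and can be trained jointly with the downstream segmenter:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FoveationModule(nn.Module):
        """Per-location distribution over K candidate FoV/resolution settings."""
        def __init__(self, in_ch=3, num_configs=4):
            super().__init__()
            self.score = nn.Sequential(
                nn.Conv2d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, num_configs, kernel_size=1),
            )

        def forward(self, thumbnail, tau=1.0):
            logits = self.score(thumbnail)                    # (B, K, H, W)
            # Soft, differentiable selection keeps the module trainable
            # end-to-end with the segmentation network.
            return F.gumbel_softmax(logits, tau=tau, hard=False, dim=1)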

    Learning to downsample for segmentation of ultra-high resolution images

    Many computer vision systems require low-cost segmentation algorithms based on deep learning, either because of the enormous size of input images or limited computational budget. Common solutions uniformly downsample the input images to meet memory constraints, assuming all pixels are equally informative. In this work, we demonstrate that this assumption can harm the segmentation performance because the segmentation difficulty varies spatially (see Figure 1 “Uniform”). We combat this problem by introducing a learnable downsampling module, which can be optimised together with the given segmentation model in an end-to-end fashion. We formulate the problem of training such a downsampling module as optimisation of sampling density distributions over the input images given their low-resolution views. To defend against degenerate solutions (e.g. over-sampling trivial regions like the background), we propose a regularisation term that encourages the sampling locations to concentrate around object boundaries. We find the downsampling module learns to sample more densely at difficult locations, thereby improving the segmentation performance (see Figure 1 “Ours”). Our experiments on benchmarks of high-resolution street view, aerial and medical images demonstrate substantial improvements in the efficiency-and-accuracy trade-off compared to both uniform downsampling and two recent advanced downsampling techniques.
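
    The sketch below shows one hedged way to realise such a module (the separable per-axis density parameterisation, function names and the use of grid_sample are assumptions for illustration, not the published design): predicted sampling densities are turned into a monotone, non-uniform sampling grid, so regions with higher density receive more of the output pixels.

    import torch
    import torch.nn.functional as F

    def density_to_grid(density_x, density_y):
        """Turn positive per-axis densities, shaped (B, W_out) and (B, H_out),
        into a non-uniform sampling grid for F.grid_sample."""
        def to_coords(d):
            c = torch.cumsum(d, dim=1)
            c = c / c[:, -1:].clamp(min=1e-8)   # normalise the cumulative density
            return c * 2.0 - 1.0                # map into grid_sample's [-1, 1]
        xs, ys = to_coords(density_x), to_coords(density_y)
        grid_x = xs.unsqueeze(1).expand(-1, ys.size(1), -1)   # (B, H_out, W_out)
        grid_y = ys.unsqueeze(2).expand(-1, -1, xs.size(1))   # (B, H_out, W_out)
        return torch.stack((grid_x, grid_y), dim=-1)          # (x, y) ordering

    def nonuniform_downsample(image, density_x, density_y):
        # Denser sampling where the predicted density is high (e.g. boundaries).
        return F.grid_sample(image, density_to_grid(density_x, density_y),
                             align_corners=True)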

    Spectral isolation of naturally reductive metrics on simple Lie groups

    We show that within the class of left-invariant naturally reductive metrics $\mathcal{M}_{\operatorname{Nat}}(G)$ on a compact simple Lie group $G$, every metric is spectrally isolated. We also observe that any collection of isospectral compact symmetric spaces is finite; this follows from a somewhat stronger statement involving only a finite part of the spectrum. Comment: 19 pages, new title and abstract, revised introduction, new result demonstrating that any collection of isospectral compact symmetric spaces must be finite, to appear Math. Z. (published online Dec. 2009).
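
    For readers outside spectral geometry, the notion of spectral isolation used here can be stated roughly as follows (the standard formulation from the isospectrality literature, paraphrased rather than quoted from the paper):

    \[
      g \in \mathcal{M}_{\operatorname{Nat}}(G) \text{ is spectrally isolated if there is a neighbourhood } U \ni g
      \text{ in } \mathcal{M}_{\operatorname{Nat}}(G) \text{ such that }
      g' \in U,\ \operatorname{Spec}(\Delta_{g'}) = \operatorname{Spec}(\Delta_{g}) \implies g' = g.
    \]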

    Bayesian image quality transfer

    Image quality transfer (IQT) aims to enhance clinical images of relatively low quality by learning and propagating high-quality structural information from expensive or rare data sets. However, the original framework gives no indication of confidence in its output, which is a significant barrier to adoption in clinical practice and downstream processing. In this article, we present a general Bayesian extension of IQT which enables efficient and accurate quantification of uncertainty, providing users with an essential prediction of the accuracy of enhanced images. We demonstrate the efficacy of the uncertainty quantification through super-resolution of diffusion tensor images of healthy and pathological brains. In addition, the new method displays improved performance over the original IQT and standard interpolation techniques in both reconstruction accuracy and robustness to anomalies in input images.
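
    A minimal sketch of how such a confidence output can be produced at inference time (illustrative only; the sampling scheme and function names are assumptions rather than the article's exact procedure): repeat stochastic forward passes of the enhancement model and report the per-voxel mean as the prediction and the per-voxel variance as the uncertainty map.

    import torch

    @torch.no_grad()
    def predict_with_uncertainty(model, x, num_samples=20):
        model.train()   # keep stochastic layers (e.g. dropout) active for sampling
        samples = torch.stack([model(x) for _ in range(num_samples)], dim=0)
        return samples.mean(dim=0), samples.var(dim=0)   # prediction, uncertainty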

    Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning

    Multi-task neural network architectures provide a mechanism that jointly integrates information from distinct sources. They are ideal in the context of MR-only radiotherapy planning, as they can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with better estimation of the variance of the errors, state-of-the-art results in OAR segmentation, and a methodology for quality assurance in radiotherapy treatment planning. Comment: Early accept at MICCAI 2018, 8 pages, 4 figures.
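
    As a hedged illustration of spatially-adaptive task loss weighting (tensor shapes, names and the cross-entropy attenuation below are assumptions made for this sketch, not the paper's exact formulation), each task's per-voxel loss is scaled by a predicted precision and penalised by the corresponding log-variance:

    import torch
    import torch.nn.functional as F

    def multitask_loss(synct_pred, synct_logvar, synct_gt,
                       seg_logits, seg_logvar, seg_gt):
        # synCT regression: heteroscedastic Gaussian negative log-likelihood.
        reg = 0.5 * (((synct_gt - synct_pred) ** 2) * torch.exp(-synct_logvar)
                     + synct_logvar).mean()
        # OAR segmentation: per-voxel cross-entropy over (B, C, H, W) logits and
        # (B, H, W) labels, attenuated by the predicted precision, with a
        # log-variance penalty to discourage predicting huge variance everywhere.
        ce = F.cross_entropy(seg_logits, seg_gt, reduction="none")
        seg = (ce * torch.exp(-seg_logvar) + 0.5 * seg_logvar).mean()
        return reg + seg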

    Multi-stage prediction networks for data harmonization

    In this paper, we introduce multi-task learning (MTL) to data harmonization (DH), where we aim to harmonize images across different acquisition platforms and sites. This allows us to integrate information from multiple acquisitions and improve the predictive performance and learning efficiency of the harmonization model. Specifically, we introduce the Multi-Stage Prediction (MSP) Network, an MTL framework that incorporates neural networks of potentially disparate architectures, trained for different individual acquisition platforms, into a larger architecture that is refined in unison. The MSP utilizes the high-level features of the single networks for individual tasks as inputs to additional neural networks that inform the final prediction, thereby exploiting redundancy across tasks to make the most of limited training data. We validate our methods on a dMRI harmonization challenge dataset, where we predict three modern platform types from data obtained on an old scanner. We show that MTL architectures such as the MSP produce around a 20% improvement in patch-based mean-squared error over current state-of-the-art methods, and that our MSP outperforms off-the-shelf MTL networks. Our code is available.
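
    A rough structural sketch of this idea (module names, channel counts and the concatenation-based fusion are assumptions for illustration, not the authors' released code): per-platform sub-networks produce high-level features for their own tasks, and a small additional network consumes those features to refine the final prediction.

    import torch
    import torch.nn as nn

    class MultiStagePrediction(nn.Module):
        def __init__(self, platform_nets, feat_ch, out_ch):
            super().__init__()
            # One (possibly heterogeneous, separately trained) network per platform.
            self.platform_nets = nn.ModuleList(platform_nets)
            self.fusion = nn.Sequential(
                nn.Conv3d(feat_ch * len(platform_nets), 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv3d(64, out_ch, kernel_size=1),
            )

        def forward(self, x):
            feats = [net(x) for net in self.platform_nets]    # per-platform features
            fused = self.fusion(torch.cat(feats, dim=1))      # joint refinement stage
            return feats, fused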

    On the degrees of freedom of a semi-Riemannian metric

    A semi-Riemannian metric on an n-manifold has n(n-1)/2 degrees of freedom, i.e. as many as the number of components of a differential 2-form. We prove that any semi-Riemannian metric can be obtained as a deformation of a constant curvature metric, this deformation being parametrized by a 2-form.
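
    One elementary way to see this count (restated here for convenience; the arithmetic is standard but the phrasing is not quoted from the paper): a symmetric metric on an n-manifold has n(n+1)/2 independent components, and the n coordinate (diffeomorphism) freedoms can be used to remove n of them, leaving

    \[
      \frac{n(n+1)}{2} - n \;=\; \frac{n(n-1)}{2} \;=\; \binom{n}{2},
    \]

    which is exactly the number of independent components of a differential 2-form.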